
    Low-Spin Spectroscopy of 50Mn

    The data on low-spin states in the odd-odd nucleus 50Mn, investigated with the 50Cr(p,nγ)50Mn fusion-evaporation reaction at the FN-TANDEM accelerator in Cologne, are reported. Shell-model and collective rotational-model interpretations of the data are given.
    Comment: 7 pages, 2 figures, to be published in the proceedings of the "Bologna 2000 - Structure of the Nucleus at the Dawn of the Century" Conference (Bologna, Italy, May 29 - June 3, 2000)

    Evolution of communication signals and information during species radiation

    Communicating species identity is a key component of many animal signals. However, whether selection for species recognition systematically increases signal diversity during clade radiation remains debated. Here we show that in woodpecker drumming, a rhythmic signal used during mating and territorial defense, the amount of species identity information encoded remained stable during woodpeckers’ radiation. Acoustic analyses and evolutionary reconstructions show interchange among six main drumming types despite strong phylogenetic contingencies, suggesting evolutionary tinkering of drumming structure within a constrained acoustic space. Playback experiments and quantification of species discriminability demonstrate sufficient signal differentiation to support species recognition in local communities. Finally, we find character displacement only in the rare cases where sympatric species are also closely related. Overall, our results illustrate how historical contingencies and ecological interactions can promote conservatism in signals during a clade radiation without impairing the effectiveness of information transfer relevant to inter-specific discrimination.

    Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night

    Deep learning techniques have enabled the emergence of state-of-the-art models to address object detection tasks. However, these techniques are data-driven, delegating the accuracy to the training dataset, which must resemble the images in the target task. The acquisition of a dataset involves annotating images, an arduous and expensive process, generally requiring time and manual effort. Thus, a challenging scenario arises when the target domain of application has no annotated dataset available, forcing tasks in such situations to rely on a training dataset from a different domain. Object detection shares this issue: it is a vital task for autonomous vehicles, where the large number of driving scenarios yields several domains of application, each requiring annotated data for the training process. In this work, a method is presented for training a car detection system with annotated data from a source domain (day images) without requiring image annotations from the target domain (night images). For that, a model based on Generative Adversarial Networks (GANs) is explored to enable the generation of an artificial dataset with its respective annotations. The artificial (fake) dataset is created by translating images from the day-time domain to the night-time domain. The fake dataset, which comprises annotated images of only the target domain (night images), is then used to train the car detector model. Experimental results showed that the proposed method achieved significant and consistent improvements, including an increase of more than 10% in detection performance compared to training with only the available annotated data (i.e., day images).
    Comment: 8 pages, 8 figures, https://github.com/viniciusarruda/cross-domain-car-detection and accepted at IJCNN 2019
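
    Since the abstract describes the pipeline only in prose, a minimal sketch of the fake-dataset generation step may help. It is not the authors' code: it assumes a trained day-to-night generator (here a hypothetical TorchScript checkpoint) and per-image annotation text files; because image-to-image translation preserves geometry, the day-image bounding boxes carry over to the fake night images unchanged.

        # Sketch only: translate annotated day images into fake night images
        # and reuse the day annotations for the translated copies.
        import os, shutil
        import torch
        from torchvision.io import read_image, write_png

        G = torch.jit.load("cyclegan_day2night.pt").eval()  # hypothetical checkpoint

        def build_fake_night_dataset(day_dir, ann_dir, out_dir):
            os.makedirs(out_dir, exist_ok=True)
            for name in os.listdir(day_dir):
                # scale uint8 pixels to [-1, 1], the usual GAN input range
                img = read_image(os.path.join(day_dir, name)).float() / 127.5 - 1.0
                with torch.no_grad():
                    fake = G(img.unsqueeze(0)).squeeze(0)   # day -> night translation
                png = ((fake + 1.0) * 127.5).clamp(0, 255).byte()
                write_png(png, os.path.join(out_dir, name))
                ann = os.path.splitext(name)[0] + ".txt"    # boxes are unchanged
                shutil.copy(os.path.join(ann_dir, ann), os.path.join(out_dir, ann))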

    Effortless Deep Training for Traffic Sign Detection Using Templates and Arbitrary Natural Images

    Deep learning has been successfully applied to several problems related to autonomous driving. Often, these solutions rely on large networks that require databases of real image samples of the problem (i.e., the real world) for proper training. The acquisition of such real-world datasets is not always possible in the autonomous driving context, and sometimes their annotation is not feasible (e.g., it takes too long or is too expensive). Moreover, in many tasks there is an intrinsic data imbalance that most learning-based methods struggle to cope with. Traffic sign detection is a problem in which these three issues appear together. In this work, we propose a novel database generation method that requires only (i) arbitrary natural images, i.e., no real image from the domain of interest, and (ii) templates of the traffic signs, i.e., templates synthetically created to illustrate the appearance of each traffic sign category. The effortlessly generated training database is shown to be effective for training a deep detector (such as Faster R-CNN) on German traffic signs, achieving 95.66% mAP on average. In addition, the proposed method is able to detect traffic signs with an average precision, recall, and F1-score of about 94%, 91%, and 93%, respectively. The experiments surprisingly show that detectors can be trained with simple data generation methods and without problem-domain data for the background, which runs counter to the common sense for deep learning.
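
    As a rough illustration of the template-based generation idea (a sketch under assumptions, not the authors' generator: sign templates as PNGs with alpha channels, arbitrary background photos, and only two toy augmentations), the following pastes a randomly transformed template onto a natural image and records the paste region as the ground-truth box:

        # Sketch only: compose one synthetic training sample.
        import random
        from PIL import Image

        def make_sample(background_path, template_path, out_size=(512, 512)):
            bg = Image.open(background_path).convert("RGB").resize(out_size)
            sign = Image.open(template_path).convert("RGBA")
            # random scale and rotation stand in for the richer augmentations
            # a real generator would apply (blur, brightness, perspective, ...)
            s = random.randint(32, 128)
            sign = sign.resize((s, s)).rotate(random.uniform(-15, 15), expand=True)
            x = random.randint(0, out_size[0] - sign.width)
            y = random.randint(0, out_size[1] - sign.height)
            bg.paste(sign, (x, y), sign)                   # alpha-composite template
            box = (x, y, x + sign.width, y + sign.height)  # annotation comes for free
            return bg, box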

    ELVIS: Entertainment-led video summaries

    © ACM, 2010. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Multimedia Computing, Communications, and Applications, 6(3): Article no. 17 (2010). http://doi.acm.org/10.1145/1823746.1823751
    Video summaries present the user with a condensed and succinct representation of the content of a video stream. Usually this is achieved by attaching degrees of importance to low-level image, audio, and text features. However, video content elicits strong and measurable physiological responses in the user, which are potentially rich indicators of what video content is memorable to or emotionally engaging for an individual user. This article proposes a technique that exploits such physiological responses to a given video stream by a given user to produce Entertainment-Led VIdeo Summaries (ELVIS). ELVIS is made up of five analysis phases which correspond to the analyses of five physiological response measures: electro-dermal response (EDR), heart rate (HR), blood volume pulse (BVP), respiration rate (RR), and respiration amplitude (RA). Through these analyses, the temporal locations of the most entertaining video subsegments, as they occur within the video stream as a whole, are automatically identified. The effectiveness of the ELVIS technique is verified through a statistical analysis of data collected during a set of user trials. Our results show that ELVIS is more consistent than RANDOM, EDR, HR, BVP, RR, and RA selections in identifying the most entertaining video subsegments for content in the comedy, horror/comedy, and horror genres. Subjective user reports also reveal that ELVIS video summaries are comparatively easy to understand, enjoyable, and informative.
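
    The selection step can be pictured with a small sketch (illustrative only, not the published ELVIS analyses, which treat the five measures in separate phases): z-score each physiological signal, average them into a single curve, and keep the top-scoring fixed-length windows as the summary's subsegments.

        # Sketch only: pick the k highest-scoring windows from combined signals.
        import numpy as np

        def top_segments(signals, window, k):
            """signals: dict of equally sampled 1-D arrays (EDR, HR, BVP, RR, RA)."""
            z = [(s - s.mean()) / (s.std() + 1e-9) for s in signals.values()]
            combined = np.mean(z, axis=0)              # naive equal weighting
            n = len(combined) // window
            scores = combined[: n * window].reshape(n, window).mean(axis=1)
            best = np.argsort(scores)[-k:]             # indices of top-k windows
            return sorted((int(i) * window, (int(i) + 1) * window) for i in best)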

    Spin-orbit readout using thin films of topological insulator Sb2Te3 deposited by industrial magnetron sputtering

    Driving a spin-logic circuit requires the production of a large output signal by spin-charge interconversion in spin-orbit readout devices. This should be possible using topological insulators, which are known for their high spin-charge interconversion efficiency. However, high-quality topological insulators have so far only been obtained on a small scale, or with large-scale deposition techniques that are not compatible with conventional industrial deposition processes. Nanopatterning and electrical spin injection into these materials have also proven difficult due to their fragile structure and low spin conductance. We present the fabrication of a spin-orbit readout device from the topological insulator Sb2Te3 deposited by large-scale industrial magnetron sputtering on SiO2. Despite a modification of the structural properties of the Sb2Te3 layer during device nanofabrication, we measured a sizeable output voltage that can be unambiguously ascribed to a spin-charge interconversion process.

    FILTWAM and Voice Emotion Recognition

    This paper introduces the voice emotion recognition part of our framework for improving learning through webcams and microphones (FILTWAM). This framework enables multimodal emotion recognition of learners during game-based learning. The main goal of this study is to validate the use of microphone data for a real-time and adequate interpretation of vocal expressions into emotional states, where the software is calibrated with end users. FILTWAM already incorporates a valid face emotion recognition module and is extended here with a voice emotion recognition module. This extension aims to provide relevant and timely feedback based upon the learner's vocal intonations. The feedback is expected to enhance the learner's awareness of his or her own behavior. Six test persons received the same computer-based tasks, in which they were requested to mimic specific vocal expressions. Each test person mimicked 82 emotions, which led to a dataset of 492 emotions. All sessions were recorded on video. The overall accuracy of our software, comparing the requested emotions with the recognized emotions, is 74.6% for the happy and neutral emotions; accuracy for the lower-scoring emotions of an extended set remains to be improved. In contrast with existing software, our solution makes it possible to continuously and unobtrusively monitor learners' intonations and convert them into emotional states. This paves the way for enhancing the quality and efficacy of game-based learning by including the learner's emotional states and linking these to pedagogical scaffolding.
    The Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands
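
    For concreteness, the reported accuracy figure can be read as simple agreement between the requested and recognized labels; a toy sketch with illustrative labels, not FILTWAM data:

        # Sketch only: accuracy as the fraction of requested emotions that the
        # recognizer labels identically.
        def accuracy(requested, recognized):
            hits = sum(r == p for r, p in zip(requested, recognized))
            return hits / len(requested)

        # e.g. 6 test persons x 82 mimicked emotions = 492 labeled samples
        requested  = ["happy", "neutral", "happy", "neutral"]
        recognized = ["happy", "neutral", "sad",   "neutral"]
        print(accuracy(requested, recognized))  # 0.75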

    Intrinsically determined cell death of developing cortical interneurons

    Cortical inhibitory circuits are formed by GABAergic interneurons, a cell population that originates far from the cerebral cortex in the embryonic ventral forebrain. Given their distant developmental origins, it is intriguing how the number of cortical interneurons is ultimately determined. One possibility, suggested by the neurotrophic hypothesis [1-5], is that cortical interneurons are overproduced and, following their migration into cortex, excess interneurons are eliminated through competition for extrinsically derived trophic signals. Here we have characterized the developmental cell death of mouse cortical interneurons in vivo, in vitro, and following transplantation. We found that 40% of developing cortical interneurons were eliminated through Bax- (Bcl-2-associated X-) dependent apoptosis during postnatal life. When cultured in vitro or transplanted into the cortex, interneuron precursors died at a cellular age similar to that at which endogenous interneurons died during normal development. Remarkably, over transplant sizes that varied 200-fold, a constant fraction of the transplanted population underwent cell death. The death of transplanted neurons was not affected by the cell-autonomous disruption of TrkB (tropomyosin receptor kinase B), the main neurotrophin receptor expressed by central nervous system (CNS) neurons [6-8]. Transplantation expanded the cortical interneuron population by up to 35%, but the frequency of inhibitory synaptic events did not scale with the number of transplanted interneurons. Together, our findings indicate that interneuron cell death is intrinsically determined, either cell-autonomously or through a population-autonomous competition for survival signals derived from other interneurons.

    A 2-Secure Code with Efficient Tracing Algorithm

    A 2-secure code with efficient tracing algorithm.
